58 research outputs found

    The fully-implicit log-conformation formulation and its application to three-dimensional flows

    The stable and efficient numerical simulation of viscoelastic flows has been a constant struggle due to the High Weissenberg Number Problem. While the stability of macroscopic descriptions can be greatly enhanced by the log-conformation method proposed by Fattal and Kupferman, the application of the efficient Newton-Raphson algorithm to the full monolithic system of governing equations, consisting of the log-conformation equations and the Navier-Stokes equations, has always posed a problem. In particular, it is the formulation of the constitutive equations by means of the spectral decomposition that hinders the application of further analytical tools. Therefore, up to now, a fully monolithic approach could only be achieved in two dimensions, as, e.g., recently shown in [P. Knechtges, M. Behr, S. Elgeti, Fully-implicit log-conformation formulation of constitutive laws, J. Non-Newtonian Fluid Mech. 214 (2014) 78-87]. The aim of this paper is to generalize these considerations to three dimensions, such that a monolithic Newton-Raphson solver based on the log-conformation formulation can also be implemented in this case. The underlying idea, analogous to the two-dimensional case, is to replace the eigenvalue decomposition in the constitutive equation by an analytically more "well-behaved" term, and to rely on the eigenvalue decomposition only for the actual computation. Furthermore, in order to demonstrate the practicality of the proposed method, numerical results of the newly derived formulation are presented for the sedimenting sphere and ellipsoid benchmarks with the Oldroyd-B and Giesekus models. The expected quadratic convergence of Newton's method is achieved. (Comment: 21 pages, 9 figures)
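
    For orientation, a standard piece of background not spelled out in the abstract: the log-conformation method rewrites the constitutive law in terms of the matrix logarithm of the conformation tensor. For the Oldroyd-B model with conformation tensor c, velocity u, and relaxation time \lambda, the starting point is

        \partial_t c + (u \cdot \nabla)\, c - (\nabla u)\, c - c\, (\nabla u)^T = -\frac{1}{\lambda}\,(c - I), \qquad \psi = \log c,

    where \psi = \log c is the substitution introduced by Fattal and Kupferman; the paper above reformulates the resulting transport equation for \psi so that no explicit eigenvalue decomposition appears in the analytical description.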

    Fully-implicit log-conformation formulation of constitutive laws

    The subject of this paper is the derivation of a new constitutive law in terms of the logarithm of the conformation tensor that can be used as a full substitute for the 2D governing equations of the Oldroyd-B, Giesekus, and other models. One of the key features of these new equations is that, in contrast to the original log-conformation equations given by Fattal and Kupferman (2004), these constitutive equations combined with the Navier-Stokes equations constitute a self-contained, non-iterative system of partial differential equations. In addition to its potential as a fruitful source for understanding the mathematical subtleties of the models from a new perspective, this analytical description also allows us to fully utilize the Newton-Raphson algorithm in numerical simulations, which by design should lead to reduced computational effort. By means of the confined cylinder benchmark we show that a finite element discretization of these new equations delivers results of accuracy comparable to that of known methods. (Comment: 21 pages, 5 figures)
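
    Since the key benefit claimed above is that the self-contained system admits a full Newton-Raphson linearization, here is a minimal generic sketch of a monolithic Newton iteration. R and J are hypothetical placeholders for an assembled residual and its exact Jacobian, not the paper's actual discretization; the quadratic residual decay near the solution is the behavior such a solver is expected to show.

        import numpy as np

        def newton(R, J, u0, tol=1e-10, max_iter=20):
            """Monolithic Newton-Raphson: solve R(u) = 0 using the exact Jacobian J."""
            u = u0.astype(float)
            for k in range(max_iter):
                r = R(u)
                print(f"iter {k}: ||R|| = {np.linalg.norm(r):.3e}")  # quadratic decay near the root
                if np.linalg.norm(r) < tol:
                    break
                u = u - np.linalg.solve(J(u), r)  # full Newton update, no operator splitting
            return u

        # toy stand-in system: R(u) = u^2 - 2 with Jacobian diag(2u)
        u = newton(lambda u: u**2 - 2.0, lambda u: np.diag(2.0 * u), np.array([1.0]))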

    Automatic implementation of material laws: Jacobian calculation in a finite element code with TAPENADE

    In an effort to increase the versatility of finite element codes, we explore the possibility of automatically creating the Jacobian matrix necessary for the gradient-based solution of nonlinear systems of equations. In particular, we aim to assess the feasibility of employing the automatic differentiation tool TAPENADE for this purpose on a large Fortran codebase that is the result of many years of continuous development. As a starting point, we describe the special structure of finite element codes and the implications that this code design carries for an efficient calculation of the Jacobian matrix. We also propose a first approach towards improving the efficiency of such a method. Finally, we present a functioning method for the automatic implementation of the Jacobian calculation in a finite element code, but also point out important shortcomings that will have to be addressed in the future. (Comment: 17 pages, 9 figures)
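
    To make the "special structure" mentioned above concrete: the global Jacobian of a finite element residual is assembled from small element-level Jacobians, so only the element routine needs to be differentiated. The following sketch is purely illustrative; a finite-difference stand-in takes the place of the TAPENADE-generated derivative code, and all names are hypothetical.

        import numpy as np

        def element_residual(u_e):
            # toy nonlinear element residual on two local degrees of freedom
            return np.array([u_e[0]**3 + u_e[1], u_e[1]**2 - u_e[0]])

        def element_jacobian(u_e, eps=1e-7):
            # stand-in for AD: column-wise finite differences of the element routine
            r0 = element_residual(u_e)
            J = np.empty((r0.size, u_e.size))
            for j in range(u_e.size):
                du = u_e.copy()
                du[j] += eps
                J[:, j] = (element_residual(du) - r0) / eps
            return J

        def assemble_jacobian(u, connectivity, n_dofs):
            # scatter the element contributions into the global matrix
            K = np.zeros((n_dofs, n_dofs))
            for dofs in connectivity:
                K[np.ix_(dofs, dofs)] += element_jacobian(u[dofs])
            return K

        # two overlapping 2-dof elements on a 3-dof "mesh"
        K = assemble_jacobian(np.ones(3), connectivity=[[0, 1], [1, 2]], n_dofs=3)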

    Implementing the Singular Value Decomposition in the Helmholtz Analytics Toolkit HeAT

    Singular value decomposition (SVD) is a fundamental tool in data science, often used, e.g., as a preprocessing step. When dealing with very large data sets, as they often arise at DLR, performing the analysis in a scalable way on HPC systems can be necessary; this also includes the computation of the SVD. In this talk we present work in progress on our implementation of a parallel SVD within HeAT (Helmholtz Analytics Toolkit), the PyTorch- and mpi4py-based HPC data analytics software developed at DLR, JSC, and KIT (Götz et al., 2020 IEEE International Conference on Big Data, pp. 276-287).
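
    As an illustration of one common strategy for such a parallel SVD, here is a sketch under our own assumptions (not necessarily the algorithm chosen in HeAT): for a tall-skinny matrix distributed row-wise over MPI processes, a local QR factorization plus an SVD of the small gathered triangular factor yields the global decomposition.

        # run with e.g.: mpirun -n 4 python tallskinny_svd.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # each rank holds a row block A_i of the tall-skinny matrix A (m >> n)
        rng = np.random.default_rng(rank)
        A_local = rng.standard_normal((1000, 50))
        n = A_local.shape[1]

        # 1) local QR of each row block: A_i = Q_i R_i
        Q_local, R_local = np.linalg.qr(A_local)

        # 2) stack the small R_i factors on every rank and factor the stack
        R_stack = np.vstack(comm.allgather(R_local))   # shape (size*n, n)
        Q2, R = np.linalg.qr(R_stack)

        # 3) SVD of the small n x n factor gives the global singular values
        U_small, s, Vt = np.linalg.svd(R)

        # 4) back-transform to obtain the locally held rows of U
        U_local = Q_local @ Q2[rank * n:(rank + 1) * n, :] @ U_small

        # now A ~ U diag(s) V^T, with U distributed row-wise exactly like A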

    HeAT -- a Distributed and GPU-accelerated Tensor Framework for Data Analytics

    To cope with the rapid growth in available data, the efficiency of data analysis and machine learning libraries has recently received increased attention. Although great advancements have been made in traditional array-based computations, most are limited by the resources available on a single computation node. Consequently, novel approaches are needed to exploit distributed resources, e.g., distributed memory architectures. To this end, we introduce HeAT, an array-based numerical programming framework for large-scale parallel processing with an easy-to-use NumPy-like API. HeAT utilizes PyTorch as a node-local eager execution engine and distributes the workload on arbitrarily large high-performance computing systems via MPI. It provides both low-level array computations and assorted higher-level algorithms. With HeAT, it is possible for a NumPy user to take full advantage of their available resources, significantly lowering the barrier to distributed data analysis. When compared to similar frameworks, HeAT achieves speedups of up to two orders of magnitude. (Comment: 10 pages, 8 figures, 5 listings, 1 table)
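
    A hedged usage sketch of the NumPy-like API described above (the split semantics follow HeAT's documentation, but the exact function set may differ between versions):

        # launched like any MPI program, e.g.: mpirun -n 4 python script.py
        import heat as ht

        # a 2D array distributed along its rows (split=0) across all processes;
        # each process holds a PyTorch tensor with its local slice
        x = ht.random.randn(10000, 1000, split=0)

        col_means = x.mean(axis=0)   # reduction across the distributed axis
        y = x - col_means            # NumPy-style broadcasting
        print(y.shape)               # the global shape, as if the array were local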

    The Helmholtz Analytics Toolkit (Heat) and its role in the landscape of massively-parallel scientific Python

    When it comes to enhancing the exploitation of massive data, machine learning methods are at the forefront of researchers' awareness. Much less so is the need for, and the complexity of, applying these techniques efficiently across large-scale, memory-distributed data volumes. In fact, these aspects, typical for the handling of massive data sets, pose major challenges to the vast majority of research communities, in particular to those without a background in high-performance computing. Often, the standard approach involves breaking up and analyzing data in smaller chunks; this can be inefficient and prone to errors, and sometimes it is not appropriate at all because the context of the overall data set can be lost. The Helmholtz Analytics Toolkit (Heat) library offers a solution to this problem by providing memory-distributed and hardware-accelerated array manipulation, data analytics, and machine learning algorithms in Python. The main objective is to make memory-intensive data analysis possible across various fields of research, in particular for domain scientists who are not experts in traditional high-performance computing but nevertheless need to tackle data analytics problems beyond the capabilities of a single workstation. The development of this interdisciplinary, general-purpose, and open-source scientific Python library started in 2018 and is based on a collaboration of three institutions of the Helmholtz Association (German Aerospace Center DLR, Forschungszentrum Jülich FZJ, Karlsruhe Institute of Technology KIT). The pillars of its development are:
    - to enable memory distribution of n-dimensional arrays,
    - to adopt PyTorch as the process-local compute engine (hence supporting GPU acceleration),
    - to provide memory-distributed (i.e., multi-node, multi-GPU) array operations and algorithms, optimizing asynchronous MPI communication (based on mpi4py) under the hood, and
    - to wrap functionalities in a NumPy- or scikit-learn-like API, so that existing applications can be ported with minimal changes and the library can be used by non-experts in HPC (see the sketch below).
    In this talk we give an illustrative overview of the current features and capabilities of our library. Moreover, we discuss its role in the existing ecosystem of distributed computing in Python, and we address technical and operational challenges in further development.
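
    A minimal sketch of the scikit-learn-like layer referenced in the last pillar (module path and signatures as we understand them from the Heat documentation; treat the details as assumptions):

        import heat as ht
        from heat.cluster import KMeans

        data = ht.random.randn(100000, 8, split=0)  # row-distributed samples
        km = KMeans(n_clusters=4)
        km.fit(data)               # the fit runs across all participating MPI processes
        labels = km.predict(data)  # per-row cluster assignment, distributed like the data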